    Energy Confused Adversarial Metric Learning for Zero-Shot Image Retrieval and Clustering

    Deep metric learning has been widely applied in many computer vision tasks, and it has recently attracted growing attention in zero-shot image retrieval and clustering (ZSRC), where a good embedding is required so that unseen classes can be distinguished well. Most existing works take this 'good' embedding to be merely a discriminative one, and thus race to devise powerful metric objectives or hard-sample mining strategies for learning discriminative embeddings. In this paper, we first emphasize that generalization ability is an equally core ingredient of this 'good' embedding, and in fact largely affects metric performance in zero-shot settings. We then propose the Energy Confused Adversarial Metric Learning (ECAML) framework to explicitly optimize a robust metric. It is mainly achieved by introducing an Energy Confusion regularization term, which breaks away from the traditional metric-learning practice of devising discriminative objectives and instead seeks to 'confuse' the learned model, encouraging generalization by reducing overfitting on the seen classes. We train this confusion term together with the conventional metric objective in an adversarial manner. Although it may seem strange to 'confuse' the network, we show that ECAML serves as an effective regularization technique for metric learning and is applicable to various conventional metric methods. The paper empirically demonstrates the importance of learning embeddings that generalize well, achieving state-of-the-art performance on the popular CUB, CARS, Stanford Online Products and In-Shop datasets for ZSRC tasks. Code available at http://www.bhchen.cn/. Comment: AAAI 2019, Spotlight
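    To make the adversarial setup above concrete, here is a minimal, hypothetical PyTorch sketch: a conventional contrastive metric loss paired with a 'confusion' regularizer that pulls embeddings of different classes together, so the two terms pull in opposite directions. The abstract does not specify the actual energy-confusion objective or training schedule, so `confusion_term` and the weight `lam` below are illustrative stand-ins, not the paper's method.

```python
# Hypothetical sketch, not the paper's exact objective.
import torch
import torch.nn.functional as F

def contrastive_metric_loss(emb, labels, margin=0.5):
    """Conventional metric term: pull same-class pairs together, push
    different-class pairs at least `margin` apart. Assumes the batch
    contains at least one same-class and one cross-class pair."""
    d = torch.cdist(emb, emb)                         # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    pos = d[same & ~self_mask].mean()                 # same class, excluding self-pairs
    neg = F.relu(margin - d[~same]).mean()            # different class
    return pos + neg

def confusion_term(emb, labels):
    """Stand-in for the energy-confusion regularizer: it rewards small
    distances between different-class embeddings, deliberately 'confusing'
    the metric to curb overfitting on the seen classes."""
    d = torch.cdist(emb, emb)
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)
    return d[diff].mean()

def regularized_loss(emb, labels, lam=0.1):
    # The two terms oppose each other, a simplified stand-in for the
    # adversarial training described in the abstract.
    return contrastive_metric_loss(emb, labels) + lam * confusion_term(emb, labels)
```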

    Complexity of Equilibria in First-Price Auctions under General Tie-Breaking Rules

    We study the complexity of finding an approximate (pure) Bayesian Nash equilibrium in a first-price auction with common priors when the tie-breaking rule is part of the input. We show that the problem is PPAD-complete even when the tie-breaking rule is trilateral (i.e., it specifies item allocations when no more than three bidders are in a tie, and adopts the uniform tie-breaking rule otherwise). This is the first hardness result for equilibrium computation in first-price auctions with common priors. On the positive side, we give a PTAS for the problem under the uniform tie-breaking rule.
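    The uniform tie-breaking rule mentioned above is easy to state in code. The following sketch (illustrative, not from the paper) computes a bidder's expected utility in a first-price auction where the highest bid wins and pays its own bid, and a k-way tie at the top is resolved uniformly, each tied bidder winning with probability 1/k.

```python
# Illustrative sketch of uniform tie-breaking; not from the paper.
import random

def first_price_uniform(bids):
    """Winner index: highest bid wins; a top-level tie is broken uniformly."""
    top = max(bids)
    tied = [i for i, b in enumerate(bids) if b == top]
    return random.choice(tied)            # each tied bidder wins w.p. 1/len(tied)

def expected_utility(i, bids, values):
    """Bidder i's expected utility: (value - bid) * Pr[i wins]."""
    top = max(bids)
    if bids[i] < top:
        return 0.0
    k = sum(1 for b in bids if b == top)  # size of the tie at the top
    return (values[i] - bids[i]) / k

# Bidders 0 and 1 tie at 3, so each wins with probability 1/2.
print(expected_utility(0, bids=[3, 3, 2], values=[5, 4, 4]))  # (5 - 3) / 2 = 1.0
```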

    On Adaptivity Gaps of Influence Maximization Under the Independent Cascade Model with Full-Adoption Feedback

    In this paper, we study the adaptivity gap of the influence maximization problem under the independent cascade model when full-adoption feedback is available. Our main results derive upper bounds for several families of well-studied influence graphs, including in-arborescences, out-arborescences, and bipartite graphs. In particular, we prove that the adaptivity gap for in-arborescences lies in [e/(e-1), 2e/(e-1)], and for out-arborescences in [e/(e-1), 2]. These are the first constant upper bounds in the full-adoption feedback model. Our analysis provides several novel ideas for tackling the correlated feedback that appears in adaptive stochastic optimization, which may be of independent interest.
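    A small simulation sketch may help make the feedback model above concrete. Under full-adoption feedback, after seeding a node the adaptive policy observes the entire cascade that node triggers before choosing its next seed. The graph encoding, edge probabilities, and the toy greedy policy below are illustrative assumptions, not the paper's construction.

```python
# Illustrative IC-model simulation; graph encoding and policy are assumptions.
import random
from collections import deque

def cascade(graph, probs, seed, activated):
    """One independent-cascade run from `seed`. Mutates `activated` and returns
    the set of newly activated nodes, i.e. the full-adoption feedback."""
    new, queue = set(), deque()
    if seed not in activated:
        activated.add(seed); new.add(seed); queue.append(seed)
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            # each edge (u, v) fires independently with probability probs[u, v]
            if v not in activated and random.random() < probs[(u, v)]:
                activated.add(v); new.add(v); queue.append(v)
    return new

def adaptive_greedy(graph, probs, nodes, k):
    """Toy adaptive policy: seed the node with the most not-yet-activated
    out-neighbors, observing each full cascade before the next choice."""
    activated = set()
    for _ in range(k):
        best = max(nodes, key=lambda u: sum(v not in activated
                                            for v in graph.get(u, [])))
        cascade(graph, probs, best, activated)   # full-adoption feedback
    return activated

# Example: a tiny out-arborescence rooted at 'r'.
graph = {'r': ['a', 'b'], 'a': ['c'], 'b': []}
probs = {('r', 'a'): 0.5, ('r', 'b'): 0.5, ('a', 'c'): 0.5}
print(adaptive_greedy(graph, probs, nodes=list(graph), k=2))
```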

    Memory-Query Tradeoffs for Randomized Convex Optimization

    We show that any randomized first-order algorithm which minimizes a d-dimensional, 1-Lipschitz convex function over the unit ball must either use Ω(d^{2−δ}) bits of memory or make Ω(d^{1+δ/6−o(1)}) queries, for any constant δ ∈ (0,1) and when the precision ε is quasipolynomially small in d. Our result implies that cutting-plane methods, which use Õ(d^2) bits of memory and Õ(d) queries, are Pareto-optimal among randomized first-order algorithms, and that quadratic memory is required to achieve optimal query complexity for convex optimization.
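    To illustrate the low-memory end of the tradeoff above, the sketch below (an illustration under stated assumptions, not from the paper) runs projected subgradient descent on a 1-Lipschitz convex function over the unit ball: it stores only a single d-dimensional iterate, i.e. O(d) memory, but needs on the order of 1/ε² first-order queries, whereas cutting-plane methods spend Õ(d²) bits of memory (e.g., a d × d matrix describing the remaining feasible region) to achieve Õ(d) queries.

```python
# Illustrative only: the O(d)-memory, query-hungry regime of the tradeoff.
import numpy as np

def project_ball(x):
    """Project x onto the unit Euclidean ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def subgradient_method(f, grad, d, steps):
    """Keeps one d-vector in memory; makes `steps` first-order queries."""
    x = np.zeros(d)
    best = f(x)
    for t in range(1, steps + 1):
        x = project_ball(x - grad(x) / np.sqrt(t))  # step 1/sqrt(t) suits a 1-Lipschitz f
        best = min(best, f(x))
    return best

# Example: f(x) = ||x - c||, a 1-Lipschitz function minimized at c.
d = 50
c = np.full(d, 0.1)                       # ||c|| < 1, so the optimum value is 0
f = lambda x: np.linalg.norm(x - c)
g = lambda x: (x - c) / (np.linalg.norm(x - c) + 1e-12)
print(subgradient_method(f, g, d, steps=2000))   # approaches 0
```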